Search for: All records

Creators/Authors contains: "Wei, Chuheng"


  1. Driver State Monitoring (DSM) is paramount for improving driving safety for both drivers of ego-vehicles and their surrounding road users, increasing public trust, and supporting the transition to autonomous driving. This paper introduces a Transformer-based classifier for DSM that uses an in-vehicle camera capturing raw Bayer images. Rather than converting to traditional RGB images, we operate directly on the original Bayer data with a Transformer-based classification algorithm. Experimental results show that accuracy on color-filled Bayer images is only 0.61% lower than on RGB images, demonstrating that Bayer data performs comparably to RGB for DSM purposes. However, utilizing Bayer data can offer potential advantages, including reduced camera cost, lower energy consumption, and shortened Image Signal Processing (ISP) time. These benefits can improve the efficiency of DSM systems and promote their widespread adoption. 
    Free, publicly-accessible full text available December 11, 2025
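    As a rough illustration of the idea, the sketch below (not the paper's code) packs an RGGB Bayer mosaic into four half-resolution planes and classifies driver state with a small Transformer encoder. The patch size, embedding width, depth, and the five example state classes are all assumptions, and positional embeddings are omitted for brevity.
    ```python
    import torch
    import torch.nn as nn

    class BayerPatchEmbed(nn.Module):
        """Packs an RGGB Bayer mosaic into 4 planes, then embeds non-overlapping patches."""
        def __init__(self, patch=16, dim=256):
            super().__init__()
            # 4 channels after 2x2 packing; stride patch//2 because packing halves resolution
            self.proj = nn.Conv2d(4, dim, kernel_size=patch // 2, stride=patch // 2)

        def forward(self, bayer):                      # bayer: (B, 1, H, W), RGGB assumed
            r  = bayer[:, :, 0::2, 0::2]
            g1 = bayer[:, :, 0::2, 1::2]
            g2 = bayer[:, :, 1::2, 0::2]
            b  = bayer[:, :, 1::2, 1::2]
            x = self.proj(torch.cat([r, g1, g2, b], dim=1))   # (B, dim, h, w)
            return x.flatten(2).transpose(1, 2)               # (B, N, dim) patch tokens

    class BayerDSMClassifier(nn.Module):
        def __init__(self, num_states=5, dim=256, depth=4, heads=8):
            super().__init__()
            self.embed = BayerPatchEmbed(dim=dim)
            self.cls = nn.Parameter(torch.zeros(1, 1, dim))
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               dim_feedforward=4 * dim, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Linear(dim, num_states)     # e.g. attentive / drowsy / distracted ...

        def forward(self, bayer):
            tokens = self.embed(bayer)
            cls = self.cls.expand(tokens.size(0), -1, -1)
            x = self.encoder(torch.cat([cls, tokens], dim=1))
            return self.head(x[:, 0])                  # classify from the [CLS] token

    logits = BayerDSMClassifier()(torch.rand(2, 1, 224, 224))  # toy raw-Bayer batch
    print(logits.shape)                                        # torch.Size([2, 5])
    ```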
  2. In the challenging realm of object detection under rainy conditions, visual distortions significantly hinder accuracy. This paper introduces Rain-Adapt Faster RCNN (RAF-RCNN), an innovative end-to-end approach that merges advanced deraining techniques with robust object detection. Our method integrates rain removal and object detection into a single process, using a novel feature transfer learning approach for enhanced robustness. By employing the Extended Area Structural Discrepancy Loss (EASDL), RAF-RCNN improves feature map evaluation, leading to significant performance gains. In quantitative testing on the Rainy KITTI dataset, RAF-RCNN achieves a mean Average Precision (mAP) of 51.4% at IoU [0.5, 0.95], exceeding previous methods by at least 5.5%. These results demonstrate RAF-RCNN's potential to significantly enhance perception systems in intelligent transportation, promising substantial improvements in reliability and safety for autonomous vehicles operating in varied weather conditions. 
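    The abstract does not spell out the EASDL formulation, so the snippet below is only a hedged stand-in: it compares a rainy-branch feature map to a clean-branch one over progressively larger local areas via average pooling. The window sizes and the L1-over-pooled-regions form are assumptions for illustration.
    ```python
    import torch
    import torch.nn.functional as F

    def extended_area_discrepancy(student_feat, teacher_feat, windows=(1, 3, 5)):
        """Compares two (B, C, H, W) feature maps over progressively larger local areas."""
        loss = 0.0
        for w in windows:
            s = F.avg_pool2d(student_feat, w, stride=1, padding=w // 2)   # regional summaries
            t = F.avg_pool2d(teacher_feat, w, stride=1, padding=w // 2)
            loss = loss + F.l1_loss(s, t)
        return loss / len(windows)

    # Toy usage: rainy-branch features are pulled toward clean-branch features.
    rainy_feat = torch.rand(2, 256, 64, 64, requires_grad=True)
    clean_feat = torch.rand(2, 256, 64, 64)
    loss = extended_area_discrepancy(rainy_feat, clean_feat)
    loss.backward()
    print(float(loss))
    ```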
  3. A significant challenge in the field of object detection lies in the system's performance under non-ideal imaging conditions, such as rain, fog, low illumination, or raw Bayer images that lack ISP processing. Our study introduces "Feature Corrective Transfer Learning", a novel approach that leverages transfer learning and a bespoke loss function to enable end-to-end detection of objects in these challenging scenarios without converting non-ideal images into their RGB counterparts. In our methodology, we first train a comprehensive model on a pristine RGB image dataset. Non-ideal images are then processed by comparing their feature maps against those from the initial ideal-RGB model. This comparison employs the Extended Area Novel Structural Discrepancy Loss (EANSDL), a novel loss function designed to quantify similarities and integrate them into the detection loss. This approach refines the model's ability to perform object detection across varying conditions through direct feature map correction, encapsulating the essence of Feature Corrective Transfer Learning. Experimental validation on variants of the KITTI dataset shows a significant improvement in mean Average Precision (mAP): a 3.8-8.1% relative gain in detection under non-ideal conditions compared to the baseline model, while remaining within 1.3% of the mAP@[0.5:0.95] achieved under ideal conditions by the standard Faster R-CNN algorithm. 
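    A minimal, hypothetical training-step sketch of the feature-corrective idea follows: a student detector trained on non-ideal images is supervised both by its detection labels and by feature maps from a frozen model trained on ideal RGB images. The torchvision Faster R-CNN is used as a stand-in, the simple per-level L1 term substitutes for EANSDL, and the weighting factor lam is an assumption.
    ```python
    import torch
    import torch.nn.functional as F
    import torchvision

    # Frozen teacher assumed to be trained on ideal RGB images; the student sees non-ideal images.
    teacher = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None).eval()
    student = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
    for p in teacher.parameters():
        p.requires_grad_(False)

    def train_step(non_ideal_imgs, paired_rgb_imgs, targets, optimizer, lam=0.1):
        """One step: detection loss on non-ideal images + feature discrepancy vs. the teacher."""
        student.train()
        det_losses = student(non_ideal_imgs, targets)            # dict of Faster R-CNN losses
        s_feats = student.backbone(torch.stack(non_ideal_imgs))  # FPN maps of the student
        with torch.no_grad():
            t_feats = teacher.backbone(torch.stack(paired_rgb_imgs))
        # Stand-in for EANSDL: mean L1 between corresponding pyramid levels.
        corrective = sum(F.l1_loss(s_feats[k], t_feats[k]) for k in s_feats) / len(s_feats)
        loss = sum(det_losses.values()) + lam * corrective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return float(loss)
    ```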
  4. Reliable prediction of vehicle trajectories at signalized intersections is crucial to urban traffic management and autonomous driving systems. However, it presents unique challenges due to the complex roadway layout at intersections, the involvement of traffic signal controls, and interactions among different types of road users. To address these issues, we present in this paper a novel model called Knowledge-Informed Generative Adversarial Network (KI-GAN), which integrates both traffic signal information and multi-vehicle interactions to predict vehicle trajectories accurately. Additionally, we propose a specialized attention pooling method that accounts for vehicle orientation and proximity at intersections. On the SinD dataset, our KI-GAN model achieves an Average Displacement Error (ADE) of 0.05 and a Final Displacement Error (FDE) of 0.12 for a 6-second observation and 6-second prediction cycle. When the prediction window is extended to 9 seconds, the ADE and FDE values are further reduced to 0.11 and 0.26, respectively. These results demonstrate the effectiveness of the proposed KI-GAN model for vehicle trajectory prediction in complex scenarios at signalized intersections, representing a significant advancement in the field. 
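    A hypothetical sketch of the specialized attention pooling follows: neighbouring vehicles are weighted by their proximity and relative heading with respect to the target vehicle before their hidden states are pooled. The feature sizes and the scoring MLP are assumptions, not the published KI-GAN design.
    ```python
    import torch
    import torch.nn as nn

    class OrientationProximityPooling(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            # score each neighbour from [distance, relative heading (cos, sin), its hidden state]
            self.score = nn.Sequential(nn.Linear(hidden + 3, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1))

        def forward(self, ego_pos, ego_heading, nbr_pos, nbr_heading, nbr_hidden):
            # ego_pos: (B, 2); nbr_pos: (B, N, 2); nbr_heading: (B, N); nbr_hidden: (B, N, H)
            rel = nbr_pos - ego_pos.unsqueeze(1)
            dist = rel.norm(dim=-1, keepdim=True)                    # proximity cue
            dtheta = nbr_heading - ego_heading.unsqueeze(1)          # orientation cue
            cues = torch.cat([dist, torch.cos(dtheta).unsqueeze(-1),
                              torch.sin(dtheta).unsqueeze(-1)], dim=-1)
            scores = self.score(torch.cat([nbr_hidden, cues], dim=-1)).squeeze(-1)
            attn = torch.softmax(scores, dim=-1)                     # weights over neighbours
            return (attn.unsqueeze(-1) * nbr_hidden).sum(dim=1)      # (B, H) pooled context

    pool = OrientationProximityPooling()
    ctx = pool(torch.rand(4, 2), torch.rand(4), torch.rand(4, 6, 2),
               torch.rand(4, 6), torch.rand(4, 6, 64))
    print(ctx.shape)   # torch.Size([4, 64])
    ```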
  5. In this paper, we examine the Dilemma Zone (DZ) problem in depth, weaving together the influences of the environment, the ego-vehicle, and the characteristics of the driver. Driver behavior in dilemma zone situations is crucial, and more research is urgently needed in this area. Our review of modeling approaches and data acquisition techniques sheds new light on driver behavior within the dilemma zone context. A thorough examination of the current research landscape reveals that several significant areas remain overlooked: the dynamic impact of vehicles and their interactions, a strong tendency to over-rely on infrastructure information, and the lack of comprehensive evaluation tools. We view these gaps not as stumbling blocks but as stepping stones for future research. More focused study of cooperative solutions is needed, given the potential of personalized modeling and the largely untapped power of machine learning techniques. We hope that future research will be guided by innovative approaches that capture and simulate personalized behavioral data using "everything-in-the-loop" simulations. Finally, we point out the research gaps and opportunities that must be addressed to effectively mitigate the DZ problem. 
  6. Despite numerous studies on trajectory prediction, existing approaches often fail to adequately capture the multifaceted and individual nature of driving behavior. To address this gap, we build on DenseTNT, an end-to-end, goal-based trajectory prediction method, and develop a new version that incorporates personalized nodes within the VectorNet graph neural network used as the context encoder. Throughout the network's computations, these nodes represent individual driver labels, enabling a more granular understanding of diverse driving behaviors. In comparative analysis, our model achieves an 11.4% reduction in minADE relative to baseline models without personalized labels. 
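    A minimal sketch of the personalization idea is given below: a learned embedding of the driver label is appended as an extra node to a VectorNet-style global graph so that every polyline feature can attend to it. The embedding size, number of driver profiles, and the single attention layer are assumptions made only for illustration.
    ```python
    import torch
    import torch.nn as nn

    class PersonalizedGlobalGraph(nn.Module):
        def __init__(self, num_drivers=100, dim=128, heads=4):
            super().__init__()
            self.driver_embed = nn.Embedding(num_drivers, dim)    # one node per driver label
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, polyline_feats, driver_id):
            # polyline_feats: (B, P, dim) subgraph outputs; driver_id: (B,) integer labels
            driver_node = self.driver_embed(driver_id).unsqueeze(1)      # (B, 1, dim)
            nodes = torch.cat([polyline_feats, driver_node], dim=1)      # append personalized node
            out, _ = self.attn(nodes, nodes, nodes)                      # global interaction
            return out[:, :-1]     # keep polyline nodes, now conditioned on the driver

    graph = PersonalizedGlobalGraph()
    feats = graph(torch.rand(2, 10, 128), torch.tensor([3, 7]))
    print(feats.shape)    # torch.Size([2, 10, 128])
    ```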
  7. The rapid progress in intelligent vehicle technology has led to a significant reliance on computer vision and deep neural networks (DNNs) to improve road safety and driving experience. However, the image signal processing (ISP) steps required for these networks, including demosaicing, color correction, and noise reduction, increase the overall processing time and computational resources. To address this, our paper proposes an improved version of the Faster R-CNN algorithm that integrates camera parameters into raw image input, reducing dependence on complex ISP steps while enhancing object detection accuracy. Specifically, we introduce additional camera parameters, such as ISO speed rating, exposure time, focal length, and F-number, through a custom layer into the neural network. Further, we modify the traditional Faster R-CNN model by adding a new fully connected layer that combines these parameters with the original feature maps from the backbone network. Our proposed model, which incorporates camera parameters, achieves a 4.2% improvement in mAP@[0.5,0.95] over the traditional Faster R-CNN model for object detection tasks on raw image data. 
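    To illustrate the fusion described above, the hypothetical sketch below maps the four camera parameters through a small fully connected network and adds the result as a per-channel offset to a backbone feature map; this additive, FiLM-style fusion is an assumption rather than the paper's exact design.
    ```python
    import torch
    import torch.nn as nn

    class CameraParamFusion(nn.Module):
        def __init__(self, channels=256):
            super().__init__()
            # small FC network over [ISO, exposure time, focal length, F-number]
            self.fc = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, channels))

        def forward(self, feat_map, cam_params):
            # feat_map: (B, C, H, W) backbone output; cam_params: (B, 4) normalized metadata
            bias = self.fc(cam_params)[:, :, None, None]     # per-channel offset from metadata
            return feat_map + bias                           # metadata-conditioned features

    fuse = CameraParamFusion()
    feat = fuse(torch.rand(2, 256, 38, 50), torch.rand(2, 4))
    print(feat.shape)    # torch.Size([2, 256, 38, 50])
    ```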